The following text is not mine. I'm copy-translating a text a dear friend of mine just wrote in Spanish, on Facebook. He writes far better than I do (much better than most people I have known). I am also not a great translator. If you can read Spanish, go read the original.
I hate my country. I want to get the hell out of here. This country stinks.
Phrases that have appeared in conversations between Mexicans since yesterday. On the network and outside of it. And to tell the truth, I would have put them between quotation marks if I had not thought them as well. At some point. Because that is the extent of the pain. Enough to hate, to insult, to give up.
But we talk and write without realizing that this might be the most terrible thing in all this mess. That the pain makes us give up and consent to play a role in the game that they, the executioners, would gladly watch from their tribunes, laughing at us while they hand each other the popcorn. That would be over the line. So let's not give them that joy.
Because even if they don't realize it, we have the obligation to notice it from the very beginning and do something to avoid falling there: the root of the pain they caused us yesterday is that this is how the annihilation of hope feels.
The shout "Alive they were taken" is, though they do not realize it but we do, a shout of hope. A pronouncement for the possible goodness in the human being. A testimony of hope in the future. A bet on life. And with his cold address, the federal attorney yesterday wanted to finish off our already aching hope. We cannot grant him that joy.
They say hope is the last thing that dies. I'd say it's the only thing that should not die. Ever. When it ends, everything ends.
There is no possible justice for the parents of the 43. Much less for the 43. Not even if the official discourse tries to make us dizzy with its "we will not rest until" propaganda. Not even the president's resignation would bring back to their classrooms a single one of those who, as of today, are just ashes. And sadly, that's the excuse that man wields to keep boarding his plane and travel wherever he pleases. The farther from Mexico, the better. Let's not do the same.
Let's remind the world this country is full of us, not of them. That the face of a person is not the dirt on his forehead and cheeks, but the skin that's below, that feels and throbs. Let's show the world Mexico is more the verse than the blood, more the idea than the terror.
And to them...
Let's not give them the joy.
To them, let's make them see that, however hard they try, there are things they will never take from us.
Our love for this country, for example.
The country, above all things.
- Antonio Malpica. After what appears to be the bitter and sadly expected end of a sad, terrible, unbelievable collective social rupture we have lived for ~50 days.
And what comes next? How can it come? How can we expect it? I have no way to answer. We, the country's people, are broken.
I've just recently built the large bulk of VMs that we use for first semester
teaching. This year that was 112. We use the same general approach for these
as our others: get a generic base image up and running, with just enough
configuration complete so a puppet client starts up; get it talking to our
master; let puppet take it from there.
There are pragmatic balances between how much we do in the kickstart versus how much we do in puppet, but also between building a new VM from scratch versus cloning an existing image, and how much specialisation we do in the clone image.
Unfortunately this year we ended up in a situation where our clone image
wouldn't talk to our puppet master out of the box, due to some changes we'd made to our master setup since the clone image was prepared. We didn't really
have enough time to re-clone the entire set of VMs from a fixed base image, and
instead needed to fix them whilst up. However we couldn't rely on puppet to do
that, since they wouldn't talk to the puppet master.
We needed to manually reset the puppet client state per VM and then
re-establish a trust relationship with the correct master (which is not the
default master hostname in our environment anymore). Luckily, we deploy a local
account with a known passphrase via the kickstart, which also has sudo access,
as an interim measure before puppet strips it back out again and sets up proper
LDAP and Kerberos authentication. So we can at least get into the boxes. However
logging into 112 VMs by hand is not a particularly pleasant task.
In the past I might have tried to achieve this using something like
clusterssh but this year I
decided to give ansible a try instead.
Ansible started life, I believe, as a tool that would let you run arbitrary
commands on remote hosts, including navigating ssh and sudo as required,
without needing any agent software on the remote end. It has since grown into an enterprise product in its own right, seemingly in competition with puppet, chef, cfengine et al.
Looking at the Ansible website now I'd be rather put off by just how
"enterprisey" it has become - much as I am by the puppet website, if I'm honest -
but if you persevere past the webinars, testimonials, etc. etc., you can find your way to the documentation, and running an arbitrary command is as simple as defining a list of hosts, then running an ansible command line referencing some or all of those hosts.
The hosts file format is simple
[somehosts]
host1
host2
...
[otherhosts]
host3
The command line can be a little bit more complex, especially if you need to
use one username for ssh, another for sudo, and you don't want to use ssh key
auth:
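For example, a sketch of the kind of invocation I mean (hostnames, usernames and the puppet command here are placeholders, and the exact flags vary between ansible releases; older versions spelt privilege escalation as --sudo rather than --become):

```shell
# Run a command on every host in the [somehosts] group, prompting for
# the ssh password (-k) and the sudo password (-K), connecting as the
# kickstart-deployed local account and escalating to root via sudo.
ansible somehosts -i hosts \
    -u localadmin -k \
    --become --become-user root -K \
    -m command -a 'puppet agent --test'
```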
"all" would work where I've used somehosts in the example above.
So there you go: using one configuration management system to bootstrap
another. I'm sure I've reserved myself a special place in hell for this.
I got word via the Electronic Frontier Foundation about an act of injustice happening to a person for doing... Not only what I do day to day, but what I promote and believe to be right: Sharing academic articles.
Diego is a Colombian working towards his Master's degree in conservation and biodiversity in Costa Rica. He is now facing up to eight years' imprisonment for... Sharing on Scribd a scholarly article he did not author.
Many people lack the knowledge and skills to properly set up a venue to share their articles with people they know. Many people hope for the best and expect academic publishers to be fundamentally good, and not to send legal threats over the simple, noncommercial act of sharing knowledge. Sharing knowledge is fundamental for science to grow, for knowledge to rise. Besides, most scholarly studies are funded by public money, and as the saying goes, they should benefit the public. And the public is everybody; the public is all of us.
And yes, if this sounds in any way like what drove Aaron Swartz to his sad suicide earlier this year... It is exactly the same thing. Thankfully (although, sadly, only after the fact), thousands of people stood firmly on Aaron's side in that case. Please sign the EFF petition to help Diego, share this, and help spread the word on the real-world need for Open Access mandates for academics!
Some links with further information:
A little more than 2 years ago, the
Ceilometer project was launched inside
the OpenStack ecosystem. Its main objective was to measure OpenStack cloud
platforms in order to provide data and mechanisms for functionalities such
as billing, alarming or capacity planning.
In this article, I would like to relate what I've been doing with other Ceilometer developers over the last 5 months. I've lowered my direct involvement in Ceilometer itself to concentrate on solving one of its biggest issues at the source, and I think it's high time to take a break and talk about it.
Ceilometer early design
In recent years, Ceilometer's core architecture hasn't changed. Without diving too deeply into all its parts, one of the early design decisions was to build the metering around a data structure we called samples. A sample is generated each time Ceilometer measures something. It is composed of a few fields, such as the id of the resource that is metered, the ids of the user and project owning that resource, the meter name, the measured value, a timestamp and some free-form metadata. Each time Ceilometer measures something, one of its components (an agent, a pollster) constructs and emits a sample headed for the storage component that we call the collector.
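As a rough sketch of that data structure (the field names and values here are illustrative, not Ceilometer's exact schema), a sample might look like:

```python
from datetime import datetime, timezone

# Illustrative sketch of a Ceilometer-style sample; the real field
# names and types in Ceilometer's code base may differ.
sample = {
    "resource_id": "instance-0000abcd",   # the thing being metered
    "user_id": "user-1234",               # owner of the resource
    "project_id": "project-5678",
    "meter_name": "cpu_util",             # what is measured
    "value": 42.0,                        # the measured value
    "timestamp": datetime(2014, 8, 18, 10, 0, tzinfo=timezone.utc),
    "metadata": {"flavor": "m1.small"},   # free-form key/value pairs
}
```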
This collector is responsible for storing the samples into a database. The
Ceilometer collector uses a pluggable storage system, meaning that you can
pick any database system you prefer. Our original implementation has been
based on MongoDB from the beginning, but we then added a SQL driver, and
people contributed things such as HBase or DB2 support.
The REST API exposed by Ceilometer lets you execute various read requests on this data store. It can return the list of resources that have been measured for a particular project, or compute statistics on metrics. Allowing such a wide range of possibilities with such a flexible data structure lets you do a lot of different things with Ceilometer, as you can query the data in almost any way you want.
The scalability issue
We soon started to encounter scalability issues with many of the read requests made via the REST API. A lot of the requests require the data storage to do full scans of all the stored samples. Indeed, the fact that the API lets you filter on any field, and also on the free-form metadata (meaning non-indexed key/value tuples), has a terrible cost in terms of performance (as pointed out before, the metadata is attached to each sample generated by Ceilometer and stored as-is). That basically means that in most drivers the sample data structure is stored in just one table or collection, in order to be able to scan it all at once, and there's no good "perfect" sharding solution, making data storage scalability painful.
It turns out that the Ceilometer REST API is unable to handle most of the
requests in a timely manner as most operations are O(n) where n is the
number of samples recorded (see
big O notation if you're
unfamiliar with it). That number of samples can grow very rapidly in an environment of thousands of metered nodes with a data retention of several weeks. Fortunately, there are a few optimizations that make things smoother in the general case, but as soon as you run more specific queries, the API becomes barely usable.
Over this last year, as the Ceilometer PTL, I discovered these issues first hand, since a lot of people were reporting exactly this kind of problem back to me. We started several blueprints to improve the situation, but it was soon clear to me that this was not going to be enough anyway.
Thinking outside the box
Unfortunately, the PTL job didn't leave me enough time to work on the actual code or to play with anything new. I was coping with most of the project bureaucracy and wasn't able to work on any good solution to tackle the issue at its root. Still, I had a few ideas that I wanted to try, and as soon as I stepped down from the PTL role, I stopped working on Ceilometer itself to try something new and to think a bit outside the box.
When one takes a look at what has been added to Ceilometer recently, one can see that Ceilometer actually needs to handle 2 types of data: events and metrics.
Events are data generated when something happens: an instance starts, a volume is attached, or an HTTP request is sent to a REST API server. These
are events that Ceilometer needs to collect and store. Most OpenStack
components are able to send such events using the notification system built
into oslo.messaging.
Metrics are what Ceilometer needs to store that is not necessarily tied to an event. Think about an instance's CPU usage, a router's network bandwidth usage, the number of images that Glance is storing for you, etc. These are not events, since nothing is happening. These are facts, states we need to meter.
Computing statistics for billing or capacity planning requires both of these
data sources, but they should be distinct. Based on that assumption, and the
fact that Ceilometer was getting support for storing events, I started to
focus on getting the metric part right.
I had been a system administrator for a decade before jumping into OpenStack development, so I know a thing or two about how monitoring is done in this area, and what kind of technology operators rely on. I also know that there's still no silver bullet, which made this a good challenge.
The first thing that came to my mind was to use some kind of time-series
database, and export its access via a REST API as we do in all OpenStack
services. This should cover the metric storage pretty well.
Cooking Gnocchi
At the end of April 2014, this led me to start a new project code-named Gnocchi. For the record, the name was picked after I had confused the OpenStack Marconi project so many times, reading it as OpenStack Macaroni. At least one OpenStack project should have a "pasta" name, right?
The point of starting a new project rather than sending patches to Ceilometer was, first, that I had no clue whether I was going to make something that would be any better, and second, that I could iterate more rapidly without being strongly coupled to the release process.
The first prototype started around the following idea: what you want is to meter things. That means storing a list of (timestamp, value) tuples for each of them. I've named these things "entities", as no assumptions are made about what they are. An entity can represent the temperature in a room or the CPU usage of an instance. The service shouldn't care and should be agnostic in this regard.
One feature that we had discussed over several OpenStack summits in the Ceilometer sessions was the idea of doing aggregation: aggregating samples over a period of time so as to store only a smaller amount of them. These are things that time-series formats such as RRDtool have been doing on the fly for a long time, and I decided it was a good lead to follow.
I assumed this was going to be a requirement when storing metrics in Gnocchi. The user would need to specify what kind of archiving they would need: 1-second precision over a day, 1-hour precision over a year, or even both.
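A hypothetical way to express such an archiving policy (this is an illustration of the concept, not Gnocchi's actual API): a list of definitions, each giving a granularity and a number of points to keep.

```python
# Illustrative sketch of an archiving policy: each definition is a
# granularity (in seconds) and how many points to keep at that
# granularity. Field names are assumptions for this example.
archive_policy = [
    {"granularity": 1,    "points": 86400},  # 1-second precision over a day
    {"granularity": 3600, "points": 8760},   # 1-hour precision over a year
]

def retention_seconds(definition):
    """Total time span covered by one archive definition."""
    return definition["granularity"] * definition["points"]
```

With both definitions in place, the store keeps one day of fine-grained data plus a year of coarse data, rather than every raw sample forever.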
The first driver written to achieve that and store those metrics inside
Gnocchi was based on whisper. Whisper
is the file format used to store metrics for the
Graphite project. For the actual storage,
the driver uses Swift, which has the advantages of being part of OpenStack and scalable.
Storing the metrics for each entity in a different whisper file and putting them in Swift turned out to have fantastic algorithmic complexity: it was O(1). Indeed, the complexity needed to store and retrieve metrics depends neither on the number of metrics you have nor on the number of things you are metering. That is already a huge win compared to the current Ceilometer collector design.
However, it turned out that whisper has a few limitations that I was unable to circumvent in any manner. I needed to patch it to remove a lot of its assumptions about manipulating files, or about everything being relative to now (time.time()). I started to hack on that in my own fork, but then everything broke. The whisper code base is, well, not the state of the art, and has zero unit tests. I was staring at a huge effort to transform whisper into the time-series format I wanted, without being sure I wasn't going to break everything (remember, no test coverage).
I decided to take a break and look into alternatives, and stumbled upon Pandas, a data manipulation and statistics library for Python. It turns out that Pandas supports time series natively, and that it could do a lot of the smart computation needed in Gnocchi. I built a new file format leveraging Pandas for computing the time series and named it carbonara (a wink to both the Carbon project and pasta, how clever!). The code is quite small (a third of whisper's, 200 SLOC vs 600 SLOC), does not have many of whisper's limitations, and it has test coverage. These Carbonara files are then, in the same fashion, stored in Swift containers.
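The kind of on-the-fly aggregation this relies on can be sketched with plain Pandas (this is just the underlying idea, not Carbonara's actual code, and the method names are today's pandas API):

```python
import pandas as pd

# Four raw measures taken 30 seconds apart...
index = pd.to_datetime([
    "2014-08-18 10:00:00", "2014-08-18 10:00:30",
    "2014-08-18 10:01:00", "2014-08-18 10:01:30",
])
measures = pd.Series([2.0, 4.0, 6.0, 8.0], index=index)

# ...downsampled to 1-minute granularity by averaging: the RRDtool-style
# aggregation described above, done natively by Pandas.
aggregated = measures.resample("1min").mean()
# Two buckets remain: 3.0 for 10:00 and 7.0 for 10:01.
```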
Anyway, Gnocchi's storage driver system is designed in the same spirit as the rest of the OpenStack and Ceilometer storage driver systems. It's a plug-in system with an API, so anyone can write their own driver. Eoghan Glynn has already started to write an InfluxDB driver, working closely with the upstream developers of that database, and Dina Belova has started to write an OpenTSDB driver. This helps make sure the API is designed the right way from the start.
Handling resources
Measuring individual entities is great and needed, but you also need to link them to resources. When measuring the temperature and the number of people in a room, it is useful to link these 2 separate entities to a resource, in this case the room, and to give a name to these relations, so one can identify which attribute of the resource is actually being measured. It is also important to be able to store attributes on these resources, such as their owners, the times they started and ended their existence, etc.
Once this list of resources is collected, the next step is to list and filter
them, based on any criteria. One might want to retrieve the list of
resources created last week or the list of instances hosted on a particular
node right now.
Resources also need to be specialized. Some resources have attributes that
must be stored in order for filtering to be useful. Think about an instance
name or a router network.
All of these requirements led to the design of what's called the indexer. The indexer is responsible for indexing entities and resources, and for linking them together. The initial implementation is based on SQLAlchemy and should be pretty efficient. It's easy enough to index the most requested attributes (columns), and they are also correctly typed.
We plan to establish a model for all known OpenStack resources (instances, volumes, networks, etc.) to store and index them in the Gnocchi indexer in order to request them efficiently from one place. A generic resource class can be used to handle resources that are not tied to OpenStack; it would be up to the users to store extra attributes.
Dropping the free form metadata we used to have in Ceilometer makes sure
that querying the indexer is going to be efficient and scalable.
REST API
All of this is exported via a REST API that was partially designed and
documented in the
Gnocchi specification in the Ceilometer repository;
though the spec is not up-to-date yet. We plan to auto-generate the
documentation from the code as we are currently doing in Ceilometer.
The REST API is pretty easy to use, and you can use it to manipulate
entities and resources, and request the information back.
Roadmap & Ceilometer integration
This whole plan was presented to and discussed with the Ceilometer team during the last OpenStack summit in Atlanta in May 2014, for the Juno release. I led a session about this entire concept, and convinced the team that using Gnocchi for our metric storage would be a good approach to solving the Ceilometer collector's scalability issue.
It was decided to run this experiment in parallel with the current Ceilometer collector for the time being, and see where it would lead the project.
Early benchmarks
Some engineers from Mirantis did a few benchmarks around Ceilometer and also
against an early version of Gnocchi, and Dina Belova presented them to us
during the mid-cycle sprint we organized in Paris in early July.
The following graph sums up the current Ceilometer performance issue pretty well: the more metrics you feed it, the slower it becomes.
For Gnocchi, while the numbers themselves are not fantastic, what is interesting is that all the graphs below show performance staying stable, with no correlation to the number of resources, entities or measures. This shows that, indeed, most of the code operates with O(1) complexity, not O(n) anymore.
Next steps
While the Juno cycle is being wrapped up for most projects, including Ceilometer, Gnocchi development is still ongoing. Fortunately, the composite architecture of Ceilometer allows a lot of its features to be replaced by other code dynamically. That, for example, enables Gnocchi to provide a Ceilometer dispatcher plugin for its collector, without having to ship the actual code in Ceilometer itself. That should keep Gnocchi's development from being slowed down by the release process for now.
The Ceilometer team aims to provide Gnocchi as a sort of technology preview with the Juno release, allowing it to be deployed alongside and plugged into Ceilometer. We'll discuss how to integrate it into the project in a more permanent and robust manner, probably during the
OpenStack Summit for Kilo
that will take place next November in Paris.
I spent most of my high-school years in a small town called Bendigo in Australia. These days, I'm living in the centre of Europe, Switzerland.
Oddly enough, a more than trivial number of people in Bendigo are now trying to imitate one of the darkest moments in Switzerland's history, a crusade to prevent the construction of a mosque.
At least in Switzerland, they tried to be slightly diplomatic: the official question on the referendum was about banning minarets rather than a whole religion. The placards in the street were more explicit, with the silhouette of blackened minarets arranged to resemble a field of inter-continental ballistic missiles:
In Bendigo, however, the gloves are off. One councillor has already declared "I wouldn't want to live near a mosque. Would you?"
Will Bendigo ban the internet too?
In Australia and Britain, the press has been fascinated with the recent release of a Jihad video on Youtube created by Brits and Aussies fighting in Syria. Fear-mongering fanatics claim the mosque will bring jihadists to Bendigo. If they genuinely believe that, shouldn't they be pushing to censor or ban the internet too, so that the children of Bendigo won't get their hands on these recruitment videos?
Weeds will grow if nobody plucks them out
The fact is, most of these anti-Islam campaigners are nutcases or opportunists looking for a political career. Hundreds of millions of Muslims worship their God in peace every day. As the referendum in Switzerland demonstrates, if good people do nothing, the nutcases will flourish like weeds. Only 30% of Swiss people voted to ban minarets, but with 47% of people not voting at all, the nutcases won. Citizens of Bendigo who value their human rights (which include freedom of religion) would be wise to avoid complacency. Even though the mosque may now have council approval, sinister groups from around the whole country are now conspiring to overturn the decision, or perhaps just to make the town a focal point for their Islamophobia campaigns.
Somalia's chief exports appear to be morally-ambiguous Salon articles about piracy and sophomoric evidence against libertarianism. However, it is the former topic that Captain Phillips concerns itself with, inspired by the hijacking of the Maersk Alabama container ship in 2009.
What is truth? In the end, Captain Phillips does not rise above Pontius Pilate in providing an answer, but it certainly tries using more refined instruments than irony or leaden sarcasm.
This motif pervades the film. Obviously, it is based on a "true story" and brings aboard that baggage, but it also permeates the plot in a much deeper sense. For example, Phillips and the US Navy lie almost compulsively to the pirates, whilst the pirates only really lie once (where they put Phillips in greater danger).
Notice further that Phillips only starts to tell the truth when he thinks all hope is lost. These telling observations become even more fascinating when you realise that they must be based on the testimony of the, well, liars. Clearly, deception is a weapon to be monopolised and there are few limits on what good guys can or should lie about if they believe they can save lives.
Even Phillips's nickname ("Irish") is a falsehood: he straight-up admits he is an American citizen.
Furthermore, there is an utterly disarming epilogue where Phillips is being treated for shock by clinically efficient medical staff. Not only does this scuttle any "blanket around the shoulders" cliché, but it is probably a highly accurate portrayal of what actually happens post-trauma. This echoes the kind of truth Werner Herzog aims for in his filmmaking, as well as his guilt-inducing duality between uncomfortable lingering and compulsive viewing.
Lastly, a starter for a meta-discussion: can a film based on real-world events even be "spoilered"? Hearing headlines on the radio before you read your newspaper hardly robs you of a literary journey...
Captain Phillips does have some quotidian problems. Firstly, the only tool for ratcheting up tension is for the Somalians to launch verbal broadsides at the Americans, with each compromise somehow escalating the situation. This technique is effective, but well before the climactic rescue scene where it is really needed, it has been subject to the most extreme diminishing returns.
(I cannot be the first to notice the "Africans festooned with guns shouting incomprehensibly" trope; I hope it is based on a Babel-esque mechanism of disorientation from miscommunication rather than anything more unsavoury.)
The racist idea that Africans prefer an AK-47 rotated about the Z-axis is socially constructed.
Secondly, the US Navy acts like a teacher with an Ofsted inspector observing quietly from the corner of the classroom; it is so well-behaved it strains belief, with no post-kill gloating or even the tiniest of post-arrest congratulations. Whilst nobody wants to see the Navy overreact badly to other military branches getting all the glory, nobody wants to see a suspiciously bland recruitment vehicle either. Paradoxically, this hermetic treatment made me unduly fascinated by them, as if they were part of some military "uncanny valley". Two quick observations:
All US-Somali interactions are recorded by a naval officer. No doubt a value-for-money defense against a USS Abu Ghraib, but knowing the plot is based on factual events, it was perhaps a little too Baudrillardian to ponder how the presence of the Navy's cameras in a scene actually lent weight to the film's version of events, crucially without me even knowing whether the parallel "real-life" footage is verifiable or not.
The navigational computers not only seem to require lines to be drawn repeatedly between points of interest, but the Maersk Alabama's arbitrary relabelling as MOTHERSHIP seems to imply that an officer could humorously rename a radar contact to something unbecoming of a 12A classification.
The drone footage: I'd love to write an essay about how Call of Duty might have influenced (or even be) cinema.
Finally, despite the title, the film is actually about two captains; the skillful liar Phillips and ... well, that's the real problem. Whilst Captain Muse is certainly no caricatured Hook, we are offered little depth beyond a "You're not just a fisherman" faux-revelation that leads nowhere. I was left inventing reasons for his akrasia so that he made any sense whatsoever.
One could charitably argue that the film attempts to stay objective on Muse, but the inability for the film to take any obvious ethical stance actually seems to confuse and then compromise the narrative. What deeper truth is actually being revealed? Is this film or documentary?
Worse still, the moral vacuum is invariably filled by the viewer's existing political outlook: are Somali pirates victims of circumstance who are forced into (alas, regrettable) swashbuckling adventures to pacify plank-threatening warlords? Or are they violent and dangerous criminals who harbour an irrational resentment against the West, flimsily represented by material goods in shipping containers?
Your improvised answer to this Rorschach test will always sit more haphazardly in the film than any pre-constructed treatment ever could.
6/10
Somalia's chief exports appear to be morally-ambiguous Salon articles about piracy and sophomoric evidence against libertarianism. However, it is the former topic that Captain Phillips concerns itself with, inspired by the hijacking of the Maersk Alabama container ship in 2009.
What is truth? In the end, Captain Phillips does not rise above Pontius Pilate in providing an answer, but it certainly tries using more refined instruments than irony or leaden sarcasm.
This motif pervades the film. Obviously, it is based on a "true story" and brings aboard all that well-travelled baggage, but it also permeates the plot in a much deeper sense. For example, Phillips and the US Navy lie almost compulsively to the pirates, whilst the pirates only really lie once where they put Phillips in greater danger.
Notice further that Phillips only starts to tell the truth when he thinks all hope is lost. These telling observations become even more fascinating when you realise that they must be based on the testimony of the, well, liars. Clearly, deception is a weapon to be monopolised and there are few limits on what good guys can or should lie about if they believe they can save lives.
Even Phillip's nickname ("Irish") is a falsehood he straight-up admits he is an American citizen.
Lastly, there is an utterly disarming epilogue where Phillips is being treated for shock by clinical efficient medical staff. Not only will it scuttle any "blanket around the shoulders" clich but is probably a highly accurate portrayal of what actually happens post-trauma. This echoes the kind of truth Werner Herzog often aims for in his filmmaking as well his guilt-inducing duality between uncomfortable lingering and compulsive viewing.
Another angle worthy of discussion: can a film based on real-world events even be "spoilered"? Hearing headlines on the before you read the newspaper hardly robs you of a literary journey...
Captain Phillips does have some quotidian problems. Firstly, the only tool for ratcheting up tension is for the Somalians to launch verbal broadsides at the Americans, with each compromise somehow escalating the situation further. This technique is genuinely effective but well before the climatic rescue scene where it is really needed it has been subject to the most extreme diminishing returns.
(I cannot be the first to notice the "Africans festooned with guns shouting incomphensively" trope I hope it is based on a Babel-esque mechanism of disorientation and miscommunication rather than anything, frankly, unsavoury.)
The racist idea that Africans prefer a AK-47 rotated about the Z-axis is socially constructed.
Secondly, the US Navy acts like a teacher with an Ofsted inspector observing quietly from the corner of the classroom; far too well-behaved it suspends belief, with no post-kill gloating or even the tiniest of post-arrest congratulations. Whilst nobody wants to see the Navy overreact badly to other military branches getting all the glory, nobody wants to see a suspiciously bland recruitment vehicle either. Paradoxically, this hermetic treatment made me unduly fascinated by them, as if they were part of some military "uncanny valley". Two quick observations:
All US Somali interactions are recorded by a naval officer. No doubt a value-for-money defense against a USS Abu Ghraib, but knowing the plot is based on a factual events, it was perhaps a little too Baudrillardian to ponder how the presence of the Navy's cameras in a scene actually lent weight to the film's version of events, crucially without me even knowing whether the parallel "real life" footage is verifiable or not.
The navigational computers not only seem to require lines to drawn repeatedly between points of interest, but the Maersk Alabama's arbitrary relabelling as MOTHERSHIP seems to imply that an officer could humourously rename a contact to something unbecoming of a 12A classification.
The drone footage: I'd love to read (or write) an essay about how Call of Duty might have influenced cinema.
Finally, despite the title, the film is actually about two captains; the skillful liar Phillips and ... well, that's the real problem. Whilst Captain Muse is certainly no caricatured Hook, we are offered little depth beyond a "You're not just a fisherman" faux-revelation that leads nowhere. I was left inventing reasons for his akrasia so that he made any sense whatsoever.
One could charitably argue that the film attempts to stay objective on Muse, but the inability for the film to take any obvious ethical stance actually seems to confuse and then compromise the narrative. What deeper truth is actually being revealed? Is this film or documentary?
Worse still, the moral vacuum is invariably filled by the viewer's existing political outlook: are Somali pirates victims of circumstance who are forced into (alas, regrettable) swashbuckling adventures to pacify plank-threatening warlords? Or are they violent and dangerous criminals who harbour an irrational resentment against the West, flimsily represented by material goods in shipping containers?
Your improvised answer to this Rorschach test will always sit more haphazardly in the film than any pre-constructed treatment ever could.
6/10
Background
I upgraded from Linux 3.8 to 3.11 recently, along with newer Mesa, X.Org and Intel driver versions, and found that a small workaround was needed because of upstream changes.
The upstream change was the addition of an "Automatic" mode for the "Broadcast RGB" property, which is now the default. This is a sensible default, since many (most?) TVs default to the more limited 16-235 range, and continuing to default to Full on the driver side would mean wrong colors on the TV. I've set my screen to use the full 0-255 range available, so as not to cut down the number of available shades of colors.
Unfortunately it seems the Automatic setting does not work for my HDMI input, i.e. blacks become grey since the driver still outputs the limited range. Maybe there is something to improve on the driver side, but I'd guess it's more about my 2008 Sony TV actually having a mode that the standard suggests limited range for. I remember the TV did default to limited range, so maybe the EDID data from the TV does not change when its RGB range is set to Full.
I hope the Automatic setting works to offer full range on newer screens and the modes they have, but that's probably up to the manufacturers and standards.
Below is an illustration of the correct setting on my Haswell CPU. When the Broadcast RGB is left to its default Automatic setting, the above image is displayed. When set to Full, the image below with deeper blacks is seen instead. I used manual settings on my camera so it's the same exposure.
Workaround
For me the workaround has evolved to the following so far. Create a /etc/X11/Xsession.d/95fullrgb file:

if [ "$(/usr/bin/xrandr -q --prop | grep 'Broadcast RGB: Full' | wc -l)" = "0" ] ; then
    /usr/bin/xrandr --output HDMI3 --set "Broadcast RGB" "Full"
fi
And since I'm using lightdm, adding the following to /etc/lightdm/lightdm.conf means the flicker only happens once during bootup:
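For reference, a minimal sketch of such a lightdm.conf addition, assuming lightdm's display-setup-script key (the section name may differ between lightdm versions):

```
[SeatDefaults]
display-setup-script=/etc/X11/Xsession.d/95fullrgb
```

This makes lightdm run the script once when the X server starts, before the greeter is shown.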
Important: when using the LightDM setting, set the executable bit (chmod +x) on /etc/X11/Xsession.d/95fullrgb for it to work. Obviously also check your output name; for me it was HDMI3.
If there is no situation where the setting would revert to "Limited 16:235" on its own, the display manager script should be enough, and having it in /etc/X11/Xsession.d as well is redundant and slows login down. For me it maybe went from 2 seconds to 3 seconds, since executing an xrandr query is not cheap.

Misc
Note that, unrelated to Full range usage, the Limited range at the moment behaves incorrectly on Haswell until the patch in bug #71769 is accepted. That means blacks are grey in Limited mode even if the screen is also set to Limited.
I'd prefer there would be a kernel parameter for the Broadcast RGB setting, although my Haswell machine does boot so fast I don't get to see too many seconds of wrong colors...
If you're running an Android device with GNU userland Linux in a chroot and need a full network access over USB cable (so that you can use your laptop/desktop machine's network connection from the device), here's a quick primer on how it can be set up.
When doing Openmoko hacking, one always first plugged in the USB cable and forwarded network, or like I did later forwarded network over Bluetooth. It was mostly because the WiFi was quite unstable with many of the kernels.
I recently found myself using a chroot on a Nexus 4 without working WiFi, so instead of my usual WiFi usage I needed network over USB... trivial, of course, except that Android is in the way and I'm an Android newbie. Thanks to ZDmitry on Freenode, I got the bits for the Android part and got it working.
On the device, have e.g. data/usb.sh with the following contents.
#!/system/xbin/sh
CHROOT="/data/chroot"

ip addr add 192.168.137.2/30 dev usb0
ip link set usb0 up
ip route delete default
ip route add default via 192.168.137.1
setprop net.dns1 8.8.8.8
echo 'nameserver 8.8.8.8' >> $CHROOT/run/resolvconf/resolv.conf
This works at least with an Ubuntu saucy chroot. The main difference in some other distro might be whether resolv.conf has moved to /run or not. You should now be all set up to browse / apt-get stuff from the device again.
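The device-side script above assumes the laptop/desktop end shares its connection as 192.168.137.1. As a rough sketch of that host side, assuming the Android USB interface appears as usb0 and the uplink is eth0 (both interface names are assumptions; check with `ip link`):

```shell
#!/bin/sh
# Give the host end of the USB link the address the device routes to
ip addr add 192.168.137.1/30 dev usb0
ip link set usb0 up
# Forward the device's traffic and NAT it out of the uplink
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```

All of this needs root on the host, and the iptables rule is not persistent across reboots.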
Update: Clarified that this is to forward the desktop/laptop's network connection to the device so that network is accessible from the device over USB.
Packages
I quite like the current status of Qt 5 in Debian and Ubuntu (the links are to the qtbase packages; there are ca. 15 other modules as well). Despite Qt 5 being bleeding edge and Ubuntu having had the need to use it even before the first stable release came out in December, the co-operation with Debian has gone well. Debian is now getting the first Qt 5 uploads into experimental, and later on into unstable. The work I contributed to pkg-kde git on the modules has been welcomed, and even though more work has been done there by others, there haven't been drastic changes that would cause too big transition problems on the Ubuntu side. It has of course helped to ask others what they want, like the whole usage of qtchooser. Now with Qt 5.0.2 I've been able to mostly re-sync all newer changes / fixes in my packaging from Debian to Ubuntu and vice versa.
There will remain some delta, as pkg-kde plans to ask for a complete transition to qtchooser so that all Qt using packages would declare the Qt version either by QT_SELECT environment variable (preferable) or a package dependency (qt5-default or qt4-default). As a temporary change related to that, Debian will have a debhelper modification that defaults QT_SELECT to qt4 for the duration of the transition. Meanwhile, Ubuntu already shipped the 13.04 release with Qt 5, and a shortcut was taken there instead to prevent any Qt 4 package breakage. However, after the transition period in Debian is over, that small delta can again be removed.
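In practice the qtchooser selection looks like this (a sketch; it assumes qtchooser and both Qt versions are installed):

```shell
# Preferred: declare the wanted Qt major version via the environment
export QT_SELECT=qt5
qmake -v           # qtchooser now resolves this to the Qt 5 qmake

# One-off selection without touching the environment
qmake -qt=qt4 -v   # qtchooser's argument form
```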
I will also need to continue pushing any useful packaging I do to Debian. I pushed qtimageformats and qtdoc last week, but I know I'm still behind with some "possibly interesting" git snapshot modules like qtsensors and qtpim.
Patches
More delta exists in the form of multiple patches related to the recent Ubuntu Touch efforts. I do not think they are of immediate interest to Debian; let's start packaging Qt 5 apps for Debian first. However, almost all of those patches have already been upstreamed to be part of Qt 5.1 or Qt 5.2, or will be later on. Some already made it into 5.0.2.
A couple of months ago Ubuntu did have some patches hanging around with no clear author information. This was a result of the heated preparation for the Ubuntu Touch launches, and the fact that patches flew (too) quickly into place in various PPAs. I started hunting down the authors, and the situation turned out to be better than I thought. About half of the patches were already upstreamed, and work on properly upstreaming the other ones was swiftly started after my initial contact. Proper DEP3 fields do help in understanding the overall situation. There are now 10 Canonical individuals in the upstream group of contributors, and at last week's sprint it turned out more people will be joining them to upstream their future patches.
Nowadays almost all the requests I get from developers for including patches concern stuff that was already upstreamed, like the XEmbed support in qtbase. This is how it should be.
One big patch that is still Ubuntu-only is the Unity appmenu support. There was a temporary solution for 13.04 that forward-ported the Qt 4 way of doing it. This will however be removed from the first 13.10 ('saucy') upload, as it's not upstreamable (the old way of supporting Unity appmenus was deliberately dropped from Qt 5). A re-implementation via the QPA plugin support is on its way, but development version users may be without appmenu support for some time. Another big patch is related to qtwebkit's device pixel ratio, which will need to be fixed. Apart from these two areas of work that need to be followed through, the patch situation is quite nice, as mentioned.

Conclusion
Free software will achieve world domination, and I'm happy to be part of it.
Update: This isn't actually that much better than letting them
access the private key, since nothing is stopping the user from
running their own SSH agent, which can be run under strace. A better
solution is in the works. Thanks Timo Juhani Lindfors and Bob Proulx
for both pointing this out.
At work, we have a shared SSH key between the different people
manning the support queue. So far, this has just been a file in a
directory where everybody could read it and people would sudo to the
support user and then run SSH.
This has bugged me a fair bit, since there was nothing stopping a
person from making a copy of the key onto their laptop, except policy.
Thanks to a tip, I got around to implementing this and figured writing
up how to do it would be useful.
First, you need a directory readable by root only; I use
/var/local/support-ssh here. The other bits you need are a small
sudo snippet and a profile.d script.
My sudo snippet looks like:
Everybody in group support can run ssh-add as root.
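As a sketch, that boils down to a single sudoers line; the /usr/bin/ssh-add path is an assumption, so check it with `which ssh-add`:

```
%support ALL = (root) NOPASSWD: /usr/bin/ssh-add
```

NOPASSWD keeps the `sudo ssh-add` call in the profile.d script below non-interactive.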
The profile.d goes in /etc/profile.d/support.sh and looks like:
if [ -n "$(groups | grep -E "(^| )support( |$)")" ]; then
export SSH_AUTH_ENV="$HOME/.ssh/agent-env"
if [ -f "$SSH_AUTH_ENV" ]; then
. "$SSH_AUTH_ENV"
fi
ssh-add -l >/dev/null 2>&1
if [ $? = 2 ]; then
mkdir -p "$HOME/.ssh"
rm -f "$SSH_AUTH_ENV"
ssh-agent > "$SSH_AUTH_ENV"
. "$SSH_AUTH_ENV"
fi
sudo ssh-add /var/local/support-ssh/id_rsa
fi
The key is unavailable to the user in question because ssh-add is
sgid, so it runs with group ssh and the process is only debuggable
by root. The only things missing are that there's no way to have the
agent prompt before using a key, and that I would like it to die, or at
least unload keys, when the last session for a user is closed; but that
doesn't seem trivial to do.
I didn't want to spam Debian Planet with a largely Ubuntu-related post, but in general I think this is very relevant for Debian mobile, and I close up the post with Debian :) http://losca.blogspot.fi/2013/03/i-want-products.html
I'm sitting at the Bella Sky lobby bar while UDS people keep pouring in. I guess I have to start this UDS with some hacking (and a little beer)! I had already bootstrapped an Ubuntu armhf rootfs and coupled it with QtMoko's kernel earlier, after I received my GTA04, but it didn't boot right away so I had nothing to report. I wanted armhf, so I chose QtMoko's 'experimental' Debian armhf rootfs + boot files as the reference to look at while working on the Ubuntu rootfs. I now went through some of the configuration files again, and voilà:
Now running apt-get install unity over SSH :) It will require OpenGL ES 2.0 hardware acceleration to run, now that the support has been integrated in Ubuntu 12.10. I will therefore need to tinker with what kind of OMAP3 armhf binary blobs are available, and check again what the situation is with the X.org DDX driver. I always feel the fun starts at this point for me, when I have the device booting and I can SSH in. That's why I'm happy the work from Golden Delicious GmbH and QtMoko helped me get here...
I believe I'm not the only one who thinks that use-case oriented Grub2 documentation is hard to find, and that a lot of the documentation out there is obsolete or wrong. My main reason for writing this blog post is a currently unanswered question regarding 2.00; meanwhile months have passed and most 1.99 documentation is still wrong as well, which might be interesting to some.
The aim is to prevent grub entries from being edited, while not restricting actual booting. This protection is meant for computers without any confidential stuff, where you just want some lightweight security under the assumption that the computer isn't physically opened.
Common setup
You will obviously want to disable any automatically generated root access giving entries, by for example uncommenting GRUB_DISABLE_RECOVERY="true" in /etc/default/grub on Debian or Ubuntu. Also you would disable allowing any external boot devices to be used in BIOS/EFI/coreboot, which you would also have protected with a password. And that often means you need to also disable USB legacy support, since some BIOSes tend to offer all USB devices as bootable without password otherwise (note that I guess that could also cause problems accessing setup on desktop computers if your only keyboard is USB).
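On Debian/Ubuntu, the relevant /etc/default/grub line is simply:

```
GRUB_DISABLE_RECOVERY="true"
```

Remember to run update-grub afterwards (a wrapper around grub-mkconfig) so the change actually lands in grub.cfg.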
1.99
So, first, to fix the false instructions found in various places: no, setting the superuser in 00_header as instructed is not enough. It might be, but it does not apply if e.g. old kernels are put into a submenu (Ubuntu bug 718670, Fedora bug 836259). The protection from editing does not apply there. And if you remove all but one kernel so that there is no submenu, a submenu will be automatically created when a new kernel is installed via security updates. I didn't need the submenu feature anyway, so I used to comment out the following lines in /etc/grub.d/10_linux:
#if [ "$list" ] && ! $in_submenu; then
#  echo "submenu \"Previous Linux versions\" {"
#  in_submenu=:
#fi

...

#if $in_submenu; then
#  echo "}"
#fi
I hope that was useful. I can imagine it causing a couple of family battles if the commonly instructed setup was the only protection used and there's, for example, a case of two computer-savvy siblings eager to get into each other's computers...
2.00 & The Question
The problem with 2.00 is that the superusers setup yields a non-bootable system, i.e. a password is required for booting. But Google wasn't smiling at me today! Terrible. Can you help me (and others) with 2.00? The aim would be a 1.99-like setup where the superuser password protects all entries from editing, but booting works fine without any passwords.
Update: Thanks, problem solved, see comments! Find the following line in /etc/grub.d/10_linux:
echo "menuentry '$(echo "$os" | grub_quote)' ${CLASS} \$menuentry_id_option 'gnulinux-simple-$boot_device_id' {" | sed "s/^/$submenu_indentation/"
And add --unrestricted there. Don't confuse this line with the other menuentry line two lines earlier. The submenu problem doesn't exist anymore in 2.00.
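The edited line would then end up looking roughly like this (a fragment of 10_linux, not standalone shell; the variables come from the surrounding script):

```
echo "menuentry '$(echo "$os" | grub_quote)' ${CLASS} --unrestricted \$menuentry_id_option 'gnulinux-simple-$boot_device_id' {" | sed "s/^/$submenu_indentation/"
```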
Thoughts about DRI.Next
On the way to the X Developer's Conference in Nuremberg, Eric and I
chatted about how the DRI2 extension wasn't really doing what we
wanted. We came up with some fairly rough ideas and even held an
informal presentation
about it.
We didn't have slides that day, having come up with the content for
the presentation in the hours just before the conference started. This
article is my attempt to capture both that discussion and further
conversations held over roast pork dinners that week.
A brief overview of DRI2
Here's a list of the three things that DRI2 currently offers.
Application authentication.
The current kernel DRM authentication mechanism restricts access to
the GPU to applications connected to the DRM master. DRI2 implements
this by having the application request the DRM cookie from the X
server which can then be passed to the kernel to gain access to the
device.
This is fairly important because once given access to the GPU, an
application can access any flink'd global buffers in the system. Given
that the application sends screen data to the X server using flink'd
buffers, that means all screen data is visible to any GPU-accessing
application. This bypasses any GPU hardware access controls.
Allocating buffers.
DRI2 defines a set of attachment points for buffers which can be
associated with an X drawable. An application needing a specific set
of buffers for a particular rendering operation makes a request of the
X server which allocates the buffers and passes back their flink names.
The server automatically allocates new buffers when window sizes
change, sending an event to the application so that it knows to
request the new buffers at some point in the future.
Presenting data to the user.
The original DRI2 protocol defined only the DRI2CopyRegion request
which copied data between the allocated buffers. SwapBuffers was
implemented by simply copying data from the back buffer to the front
buffer. This didn't provide any explicit control over frame
synchronization, so a new request, DRI2SwapBuffers, was added
to expose controls for that. This new request only deals with the
front and back buffers, and either copies from back to front or
exchanges those two buffers.
Along with DRI2SwapBuffers, there are new requests that wait for
various frame counters and expose those to GL applications through the
OML_sync_control extension.
What's wrong with DRI2?
DRI2 fixed a lot of the problems present with the original DRI
extension, and made reliable 3D graphics on the Linux desktop
possible. However, in the four years since it was designed, we've
learned a lot, and the graphics environment has become more complex.
Here's a short list of some DRI2 issues that we'd like to see fixed.
InvalidateBuffers events. When the X window size changes, the
buffers created by the X server for rendering must change size to
match. The problem is that the client is presumably drawing to the
old buffers when the new ones are allocated. Delivering an event to
the client is supposed to make it possible for the client to keep
up, but the reality is that the event is delivered at some random
time to some random thread within the application. This leads to
general confusion within the application, and often results in a
damaged frame on the screen. Fortunately, applications tend to draw
their contents often, so the damaged frame only appears briefly.
No information about new back buffer contents. When a buffer swap
happens and the client learns about the new back buffer, the back
buffer contents are always undefined. For most applications, this
isn't a big deal as they're going to draw the whole
window. However, compositing managers really want to reduce
rendering by only repainting small damaged areas of the
window. Knowing what previous frame contents are present in the
back buffer allows the compositing manager to repaint just the
affected area.
Un-purgable stale buffers. Between the X server finishing with a
buffer and the client picking it up for a future frame, we don't
need to save the buffer contents and should mark the buffer as
purgable. With the current DRI2 protocols, this can't be done,
which leaves all of those buffers hanging around in memory.
Driver-specific buffers. The DRI2 buffer handles are device
specific, and so we can't use buffers from other devices on the
screen. External video encoders/cameras can't be used with
the DRI2 extension.
GEM flink has lots of issues. The flink names are global, allowing
anyone with access to the device to access the flink data
contents. There is also no reference to the underlying object, so
the X server and client must carefully hold references to GEM
objects during various operations.
Proposed changes for DRI.Next
Given the three basic DRI2 operations (authentication, allocation,
presentation), how can those be improved?
Eliminate DRI/DRM magic-cookie based authentication
Kristian Høgsberg, Martin Peres, Timothée Ravier & Daniel Vetter
gave a talk on DRM2 authentication
at XDC this year that outlined the problems with the current DRM
access control model and proposed some fairly simple solutions,
including using separate device nodes: one for access to the GPU
execution environment and a separate, more tightly controlled one, for
access to the display engine.
Combine that with the elimination of flink for communicating data
between applications and there isn't a need for the current
magic-cookie based authentication mechanism; simple file permissions
should suffice to control access to the GPU.
Of course, this ignores the whole memory protection issue when running
on a GPU that doesn't provide access control, but we already have that
problem today, and this doesn't change that, other than to eliminate
the global uncontrolled flink namespace.
Allocate all buffers in the application
DRI2 does buffer allocation in the X server. This ensures that
multiple (presumably cooperating) applications drawing to the same
window will see the same buffers, as is required by the GLX
extension. We suspected that this wasn't all that necessary, and it
turns out to have been broken several years ago. This is the
traditional way in X to phase out undesirable code, and it provides an
excellent opportunity to revisit the original design.
Doing buffer allocations within the client has several
benefits:
No longer need DRI2 additions to manage new GL buffers. Adding HiZ
to the intel driver required new DRI2 code in the X server, even
though X wasn't doing anything with those buffers at all.
Eliminate some X round trips currently required for GL buffer
allocation.
Knowing what's in each buffer. Because the client allocates each
buffer, it can track the contents of them.
Size tracking is trivial. The application sends GL the size
of the viewport, and the union of all viewports should be the same
as the size of the window (or there will be undefined contents on
the screen). The driver can use the viewport information to
size the buffers and ensure that every frame on the screen is
complete.
Present buffers through DMA-buf
The new DMA-buf infrastructure provides a cross-driver/cross-process
mechanism for sharing blobs of data. DMA-buf provides a way to take a
chunk of memory used by one driver and pass it to another. It also
allows applications to create file descriptors that reference these
objects.
For our purposes, it's the file descriptor which is immediately
useful. This provides a reliable and secure way to pass a reference
to an underlying graphics buffer from the client to the X
server by sending the file descriptor over the local X socket.
An additional benefit is that we get automatic integration of data
from other devices in the system, like video decoders or non-primary
GPUs. The Prime support added in DRI version 2.8 hacks around this
by sticking a driver identifier in the driverType value.
Once the buffer is available to the X server, we can create a request
much like the current DRI2SwapBuffers request, except instead of
implicitly naming the back and front buffers, we can pass an
arbitrary buffer and have those contents copied or swapped to the
drawable.
We also need a way to copy a region into the drawable. I don't know if
that needs the same level of swap control, but it seems like it would
be nice. Perhaps the new SwapBuffers request could take a region and
offset as well, copying data when swapping isn't possible.
Managing buffer allocations
One trivial way to use this new buffer allocation mechanism would be
to have applications allocate a buffer, pass it to the X server and
then simply drop their reference to it. The X server would keep
a reference until the buffer was no longer in use, at which point the
buffer memory would be reclaimed.
However, this would eliminate a key optimization in current drivers:
the ability to re-use buffers instead of freeing and allocating new
ones. Re-using buffers takes advantage of the work necessary to set up
the buffer, including constructing page tables, allocating GPU memory
space and flushing caches.
Notifying the application of idle buffers
Once the X server is finished using a buffer, it needs to notify the
application so that the buffer can be re-used. We could send these
notifications in X events, but that ends up in the twisty mess of X
client event handling which has already caused so much pain with
Invalidate events. The obvious alternative is to send them back in a
reply. That nicely controls where the data are delivered, but causes
the application to block waiting for the X server to send the reply.
Fortunately, applications already want to block when swapping buffers
so that they get throttled to the swap buffers rate. That is currently
done by having them wait for the DRI2SwapBuffers reply. This provides
a nice place to stick the idle buffer data. We can simply list buffers
which have become idle since the last SwapBuffers reply was delivered.
Releasing buffer memory
Applications which update only infrequently end up with a back buffer,
allocated for their last frame, which can't be freed by the
system. The fix for this is to mark the buffer purgable, but that can
only be done after all users of the buffer are finished with it.
With this new buffer management model, the application effectively
passes ownership of its buffers to the X server, and the X server
knows when all use of the buffer is finished. It could mark buffers
as purgable at that point. When the buffer was sent back in the
SwapBuffers reply, the application would be able to ask the kernel to
mark it un-purgable again.
A new extension? Or just a new DRI2 version?
If we eliminate the authentication model and replace the buffer
allocation and presentation interfaces, what of the existing DRI2
protocol remains useful? The only remaining bits are the other
synchronization requests: DRI2GetMSC, DRI2WaitMSC, DRI2WaitSBC and
DRI2SwapInterval.
Given this, does it make more sense to leave DRI2 as it is and plan on
deprecating, and eventually eliminating, it?
Doing so would place a support burden on existing applications, as
they'd need to have code to use the right extension for the common
requests. They'll already need to support two separate buffer
management versions though, so perhaps this burden isn't that onerous?
When Google launched Google+, a lot of people were very sceptical. Some
outright claimed it to be useless. I must admit, it has a number of functions
that really rock.
Google Plus is not a Facebook clone. It does not try to mimic
Facebook that much. To me, it looks much more like a blog thing. A
blog system, where everybody has to have a Google account, and then can
comment (plus, you can then restrict access and share only with some people).
It also encourages you to share shorter posts. Successful blogs always
tried to make their posts "articles". Now the posts themselves are merely
comments; but not as crazy short as Twitter (it is not a Twitter clone either),
and it does have rich media contents, too.
Those who expect it to replace their Facebook, where the interaction
is all about personal stuff, will be somewhat disappointed, because it
IMHO encourages the smalltalk type of interaction much less.
However, it has won over a couple of pretty high-profile people who share
their thoughts and web discoveries with the world. Some of the most active
users I follow on Google Plus are:
Linus Torvalds and
Tim O'Reilly (of the
publishing house O'Reilly)
Of course I also have a number of friends that share private stuff on
Google Plus. But in my opinion the strength of Google Plus is on sharing
publicly. Since Google is the king of search, they can feed shares
from your friends into your regular search results, and there is also a
pretty interesting search in Google Plus. The key difference is that with
this search, the focus is on what is new. Regular web search is also
a lot about searching for old things (where you did not bother to remember
the address or bookmark the site - and mind it, today a lot of people even
"google for Google" ...) For example I like the
plus search for data mining
because it occasionally has some interesting links in it. A lot of the stuff
is coming in again and again, but using the "j and k" keys, I can quickly
scroll through these results to see if there is anything interesting. And
there are quite a lot of interesting things I've discovered this way.
Note that this can change anytime. And maybe it is because I'm interested
in technology stuff that it works well for me. But say, maybe you are more
into HDR photography than
me (I think they look unreal, as if someone has done way too much contrast and
edge enhancing on the image). But go there, and press "j" a number of times to
browse through some HDR shots. That is a pretty neat search function there.
And if you come back tomorrow, there will likely be new images!
Facebook tried to clone this functionality. Google+ launched in June 2011,
and in September 2011, Facebook added "subscribers". So they realized the
need for having "non-friends" that are interested in what you are doing. Yet,
I don't know anybody actually using it. And the
Public posts search is much less interesting than that of Google Plus, and the
nice keyboard navigation is also missing.
Don't get me wrong, Facebook still has its uses. When I travel, Facebook
is great for me to get into contact with locals to go swing dancing. There
are a number of events where people only invite you on Facebook (and that is
one of the reasons why I've missed a number of events - because I don't use
Facebook that much). But mind it, a lot of the stuff that people share on
Facebook is also really boring.
And that will actually be the big challenge for Google: keeping the
search results interesting. Once you have millions of people there sharing
pictures of lolcats - will it still return good results? Or will just about
every search give you more lolcats?
And of course, spam. The SEO crowd is just warming up in exploring the
benefits of Google Plus. And there are quite some benefits to be gained from
connecting web pages to Google Plus, as this will make your search results
stick out somehow, or maybe give them that little extra edge over other
results. But just like Facebook was at some point so heavily spammed,
when every little shop was setting up its Facebook pages, inviting everyone to
all the events and so on - this is bound to happen on Google Plus, too. We'll
see how Google then reacts, and how quickly and effectively.
Do you ever have one of those afternoons when you get a great idea and have the whole evening for the task? The task is a relaxing one that won't need much attention, and you can watch a movie or something on the side. But then the evening turns into night as you realize a couple of little details adding complexity to the idea, and the task turns out to be much more invasive to your evening than you thought?
In this example, I got the great idea to upgrade my Debian-running NAS device (thanks Martin for everything!) to use ext4 instead of ext3. The kind of idea that takes a long time for relatively little practical benefit, but just feels like a nice thing to do when you have the extra nerd time available. It's basically just opening up the NAS device, mounting its hard disk on a laptop via an external case, running tune2fs and fsck, then putting the disk back. It just takes a long time for the initial fsck (to make sure everything is intact) and then for the fsck run required to make the filesystem mountable as ext4.
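The tune2fs/fsck steps amount to something like the following sketch; /dev/sdX1 is a placeholder, so double-check the actual device node (and have backups) before running any of it:

```shell
#!/bin/sh
DEV=/dev/sdX1       # placeholder: the NAS disk in the external case
e2fsck -f "$DEV"    # the long initial check: make sure the ext3 fs is intact
tune2fs -O extents,uninit_bg,dir_index "$DEV"   # enable the ext4 features
e2fsck -fD "$DEV"   # required after enabling features; makes it mountable as ext4
# then change ext3 -> ext4 for this filesystem in the NAS's /etc/fstab
```

Note that only files written after the migration use extents; existing files stay in the old block-mapped format.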
In this situation, though, it would have been beneficial to have ext4 support in the flashed initramfs before the migration. So... before the photo below, I had already:
done the ext4 migration and fsck:s
screwed the disk back into the NAS case, attached the cables and found that it didn't boot
(with the hard disk attached to the laptop again, tried manually unpacking the initramfs and adding the ext4 module... also had time to bind-mount everything and chroot into the ARM system to run update-initramfs manually... also tried booting with those... until I remembered the simple fact that the /boot partition is only for show and the initramfs is actually loaded directly from flash)
copied the main root filesystem content from the original disk to another external disk with ext3 partition
attached the other disk (with the same UUIDs) to the QNAP NAS device, booted, double-checked that I now have ext4 specified under /etc/initramfs-tools/modules, and reconfigured the linux image, which also regenerates the initramfs and flashes it
And in the photo, what's happening is that:
I have the original disk reattached again, and the system booted with the initramfs generated and flashed from the ext3 disk
the NAS device is hanging in the air, cover open, from the closet where I have things stuffed (normally secured with cable ties), and I need to support it with a knee or one hand, since the 2TB disk is much heavier than the small SSD I used as the ext3 disk, and the power and RJ-45 cables would otherwise be under pretty heavy load
Since I have only one hand free and can't use a laptop, I'm logging in via my Nokia N9 and reflashing the kernel + initramfs from this original disk, just to make sure everything is now alright and that it still boots after the flashing (it does!). Note that I feel the setup is secure enough for uninterrupted flashing, so I can indeed support the NAS with a knee, hold the N9 in one hand and take a photo with the camera in the other.
And so we have had a productive and educating afternoon/evening/night once again. Does this ever happen to you?